Preliminary analysis of the effects of confirmation bias on software defect density
In cognitive psychology, confirmation bias is defined as the tendency of people to verify hypotheses rather than to refute them. During unit testing, software developers should aim to fail their code. However, due to confirmation bias, many defects may be overlooked, leading to an increase in software defect density. In this research, we empirically analyze the effect of software developers' confirmation bias on software defect density.
Rediscovery Datasets: Connecting Duplicate Reports
The same defect can be rediscovered by multiple clients, causing unplanned outages and reduced customer satisfaction. In the case of popular open source software, a high volume of defects is reported on a regular basis, and a large number of these reports are actually duplicates, or rediscoveries, of each other. Researchers have analyzed the factors related to the content of duplicate defect reports in the past. However, some other potentially important factors, such as the inter-relationships among duplicate defect reports, are not readily available in defect tracking systems such as Bugzilla. This information may speed up bug fixing, enable efficient triaging, improve customer profiles, etc.
In this paper, we present three defect rediscovery datasets mined from Bugzilla. The datasets capture data for three groups of open source software projects: Apache, Eclipse, and KDE. They contain information about approximately 914 thousand defect reports over a period of 18 years (1999-2017), capturing the inter-relationships among duplicate defects. We believe that sharing these data with the community will help researchers and practitioners better understand the nature of defect rediscovery and enhance the analysis of defect reports.
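The inter-relationships among duplicate reports are essentially pairwise "duplicate-of" links, and collapsing them into rediscovery groups is a small graph problem. A minimal sketch using union-find, assuming hypothetical report IDs and function names rather than the datasets' actual schema:

```python
# Sketch: grouping duplicate defect reports into rediscovery groups
# via union-find. Report IDs and names are illustrative assumptions,
# not the actual schema of the Apache/Eclipse/KDE datasets.

def find(parent, x):
    # Path-halving find: follow parents to the root representative.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def rediscovery_groups(duplicate_pairs):
    """Collapse pairwise duplicate links into groups of reports
    that all describe the same underlying defect."""
    parent = {}
    for a, b in duplicate_pairs:
        parent.setdefault(a, a)
        parent.setdefault(b, b)
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:
            parent[ra] = rb  # union the two chains
    groups = {}
    for report in parent:
        groups.setdefault(find(parent, report), set()).add(report)
    return list(groups.values())

# Bug 101 duplicates 100; 102 duplicates 101; 200 duplicates 201.
pairs = [(101, 100), (102, 101), (200, 201)]
print(sorted(map(sorted, rediscovery_groups(pairs))))
# -> [[100, 101, 102], [200, 201]]
```

Note that transitive links (102 → 101 → 100) end up in a single group, which is exactly the information a flat list of duplicate marks in a bug tracker does not surface directly.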
Empirical analyses of the factors affecting confirmation bias and the effects of confirmation bias on software developer/tester performance
Background: During all levels of software testing, the goal should be to fail the code. However, software developers and testers are more likely to choose positive tests rather than negative ones due to a phenomenon called confirmation bias. Confirmation bias is defined as the tendency of people to verify their hypotheses rather than to refute them. In the literature, there are theories about the possible effects of confirmation bias on software development and testing. Due to the tendency towards positive tests, most software defects remain undetected, which in turn leads to an increase in software defect density.
Aims: In this study, we analyze the factors affecting confirmation bias in order to discover methods for circumventing it. The factors we investigate are experience in software development/testing and reasoning skills that can be gained through education. In addition, we analyze the effect of confirmation bias on software developer and tester performance.
Method: In order to measure and quantify the confirmation bias levels of software developers/testers, we prepared pen-and-paper and interactive tests based on two tasks from the cognitive psychology literature. These tests were administered to 36 employees of a large-scale telecommunications company in Europe as well as 28 graduate computer engineering students of Bogazici University, resulting in a total of 64 subjects. We evaluated the outcomes of these tests using the metrics we proposed, in addition to some basic methods inherited from the cognitive psychology literature.
Results: The results showed that, regardless of experience in software development/testing, abilities such as logical reasoning and strategic hypothesis testing are the factors that differentiate subjects with low confirmation bias levels. Moreover, our analysis of the relationship between code defect density and the confirmation bias levels of software developers and testers showed a direct correlation between confirmation bias and the defect proneness of the code.
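A correlation claim of this kind is typically checked with Pearson's r. The sketch below uses fabricated toy numbers purely for illustration; the study's actual measurements are not reproduced in this abstract:

```python
# Sketch: Pearson's r between per-developer confirmation bias scores
# and the defect density of the code they produced. All numbers here
# are fabricated toy data, not the study's measurements.
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

bias_levels = [0.2, 0.35, 0.5, 0.6, 0.8]    # hypothetical bias scores
defect_density = [1.1, 1.8, 2.4, 3.0, 4.2]  # defects per KLOC (toy)
print(round(pearson_r(bias_levels, defect_density), 3))  # -> 0.996
```

On real data one would also report a significance level and sample size; a high r on five fabricated points proves nothing by itself.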
Conclusions: Our findings show that strong logical reasoning and hypothesis testing skills are differentiating factors in software developer/tester performance in terms of defect rates. We recommend that companies focus on improving the logical reasoning and hypothesis testing skills of their employees by designing training programs. As future work, we plan to replicate this study in other software development companies. Moreover, we will use confirmation bias metrics in addition to product and process metrics for software defect prediction. We believe that confirmation bias metrics would improve the prediction performance of the learning-based defect prediction models we have been building for over a decade.
Influence of confirmation biases of developers on software quality: an empirical study
The thought processes of people have a significant impact on software quality, as software is designed, developed, and tested by people. Cognitive biases, which are defined as patterned deviations of human thought from the laws of logic and mathematics, are a likely cause of software defects. However, there is little empirical evidence to date to substantiate this assertion. In this research, we focus on a specific cognitive bias, confirmation bias, which is defined as the tendency of people to seek evidence that verifies a hypothesis rather than evidence that falsifies it. Due to confirmation bias, developers tend to perform unit tests to make their program work rather than to break their code. Therefore, confirmation bias is believed to be one of the factors that lead to increased software defect density. In this research, we present a metric scheme that explores the impact of developers’ confirmation bias on software defect density. In order to estimate the effectiveness of our metric scheme in quantifying confirmation bias within the context of software development, we performed an empirical study that addressed the prediction of the defective parts of software. In our empirical study, we used confirmation bias metrics on five datasets obtained from two companies. Our results provide empirical evidence that human thought processes and cognitive aspects deserve further investigation to improve decision making in software development for effective process management and resource allocation.
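Using a bias score as one feature in a defect predictor can be sketched with a tiny logistic regression. The feature names, toy data, and model below are illustrative assumptions; the paper's actual metric scheme, datasets, and learners are not reproduced here:

```python
# Sketch: augmenting a per-module defect predictor with a developer
# confirmation bias feature. All data and feature choices are
# fabricated for illustration, not taken from the paper's datasets.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.5, epochs=2000):
    # Plain stochastic gradient descent on the logistic loss.
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Features per module: [lines_of_code / 1000, developer_bias_score]
X = [[0.2, 0.1], [0.3, 0.2], [1.5, 0.9], [1.2, 0.8], [0.4, 0.7], [1.0, 0.2]]
y = [0, 0, 1, 1, 1, 0]  # 1 = module turned out defective
w, b = train_logreg(X, y)

# Query: a small module written by a high-bias developer.
risk = sigmoid(w[0] * 0.5 + w[1] * 0.85 + b)
print(f"predicted defect risk: {risk:.2f}")  # high risk on this toy data
```

In this toy setup the bias score separates the labels, so the model learns a large positive weight for it; whether that holds on real projects is precisely the empirical question the study investigates.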
Modeling Human Aspects to Enhance Software Quality Management
The aim of this research is to explore the impact of cognitive biases and social networks on testing and developing software. The research will address two critical areas: i) predicting the defective parts of the software, and ii) determining the right person to test those defective parts. Every phase of software development requires analytical problem solving skills, and using everyday-life heuristics instead of the laws of logic and mathematics may affect the quality of the software product in an undesirable manner. The proposed research aims to understand how the mind works in solving problems. People in software development also work in teams, so their social interactions in solving a problem may affect the quality of the product. The proposed research therefore also aims to model the social network structure of testers and developers to understand its impact on software quality and defect prediction performance.
An analysis of the effects of company culture, education and experience on confirmation bias levels of software developers and testers
In this paper, we present a preliminary analysis of the effects of factors such as company culture, education, and experience on the confirmation bias levels of software developers and testers. Confirmation bias is defined as the tendency of people to verify their hypotheses rather than to refute them, and it thus affects all software testing.
Confirmation Bias in Software Development and Testing: An Analysis of the Effects of Company Size, Experience and Reasoning Skills
During all levels of software testing, the goal should be to fail the code in order to discover software defects and hence increase software quality. However, software developers and testers are more likely to choose positive tests rather than negative ones. This is due to a phenomenon called confirmation bias, which is defined as the tendency to verify one’s own hypotheses rather than trying to refute them. In this work, we aimed to identify the factors that may affect the confirmation bias levels of software developers and testers. We investigated the effects of company size, experience, and reasoning skills on bias levels. We prepared pen-and-paper and interactive tests based on two tasks from the cognitive psychology literature. During the pen-and-paper test, subjects had to test given hypotheses, whereas the interactive test required both hypothesis generation and testing. These tests were conducted with employees of one large-scale telecommunications company, three small and medium scale software companies, and graduate computer engineering students, resulting in a total of eighty-eight subjects. The results showed that, regardless of experience and company size, abilities such as logical reasoning and strategic hypothesis testing are the factors that differentiate subjects with low confirmation bias levels. Therefore, education and/or training programs that emphasize mathematical reasoning techniques are useful for the production of high-quality software. Moreover, in order to investigate the relationship between code defect density and the confirmation bias of software developers, we performed an analysis among developers involved in a software project at a large-scale telecommunications company. We also analyzed the effect of confirmation bias during the software testing phase. Our results showed that there is a direct correlation between confirmation bias and the defect proneness of the code.
Towards a Metric Suite Proposal to Quantify Confirmation Biases of Developers
The goal of software metrics is the identification and measurement of the essential parameters that affect software development. Metrics can be used to improve software quality and productivity. Existing metrics in the literature are mostly product or process related. However, the thought processes of people have a significant impact on software quality, as software is designed, implemented, and tested by people. Therefore, in defining new metrics, we need to take human cognitive aspects into account. Our research aims to address this need by proposing a new metric scheme to quantify a specific human cognitive aspect, namely "confirmation bias". In our previous research, we defined a methodology to measure the confirmation biases of people. In this research, we propose a metric suite that can be used by practitioners during daily decision making. Our proposed metric set consists of six metrics with a theoretical basis in cognitive psychology and measurement theory. Empirical samples of these metrics were collected from two software companies specialized in two different domains in order to demonstrate their feasibility. We suggest ways in which practitioners may use these metrics to improve the software development process.
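The abstract does not spell out the six metrics, but one natural indicator of this kind can be sketched from a hypothesis-testing session log. The labels and formula below are assumptions for illustration, not the paper's actual metric definitions:

```python
# Sketch: one illustrative confirmation bias indicator computed from a
# logged hypothesis-testing session. The trial labels and the formula
# are assumptions; the paper's six-metric suite is not reproduced here.

def positive_test_ratio(trials):
    """Fraction of trials designed to confirm (rather than refute)
    the subject's current hypothesis. Values near 1.0 suggest a
    stronger confirmation bias."""
    if not trials:
        raise ValueError("empty session log")
    confirmatory = sum(1 for t in trials if t == "confirm")
    return confirmatory / len(trials)

# A session where the subject ran 8 confirming probes and 2 refuting ones.
session = ["confirm"] * 8 + ["refute"] * 2
print(positive_test_ratio(session))  # -> 0.8
```

A metric of this shape is easy to collect from the pen-and-paper and interactive tests the research describes, which is what makes it plausible as a daily decision-making aid for practitioners.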